A Note on Refinement Operators for IE-Based ILP Systems
Authors
Abstract
ILP systems which use some form of Inverse Entailment (IE) are based on clause refinement through a hypothesis space bounded by a most specific clause. In this paper we give a new analysis of refinement operators in this setting. In particular, Progol’s refinement operator is revisited and discussed. It is known that Progol’s refinement operator is incomplete with respect to the general subsumption order. We introduce a subsumption order relative to a most specific (bottom) clause. This subsumption order, unlike previously suggested orders, characterises Progol’s refinement space. We study the properties of this subsumption order and show that ideal refinement operators exist for it. It is shown that efficient operators can be implemented for least generalisation and greatest specialisation in the subsumption order relative to a bottom clause. We also study less restricted subsumption orders relative to a bottom clause and show how Progol’s incompleteness can be addressed.
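The generality order underlying the abstract is θ-subsumption: clause C subsumes clause D iff some substitution θ maps every literal of C to a literal of D. The following is a minimal sketch of a subsumption check under illustrative assumptions (clauses as lists of literal tuples, variables as capitalised strings); it is not the paper's algorithm, only the standard definition made concrete.

```python
def is_var(t):
    # Convention (assumption): variables are capitalised strings, e.g. "X".
    return isinstance(t, str) and t[:1].isupper()

def match_literal(lit_c, lit_d, theta):
    """Try to extend substitution theta so that lit_c . theta == lit_d."""
    pred_c, args_c = lit_c
    pred_d, args_d = lit_d
    if pred_c != pred_d or len(args_c) != len(args_d):
        return None
    theta = dict(theta)  # work on a copy so backtracking is cheap
    for a, b in zip(args_c, args_d):
        if is_var(a):
            if a in theta:
                if theta[a] != b:
                    return None  # inconsistent binding
            else:
                theta[a] = b
        elif a != b:
            return None  # constant mismatch
    return theta

def subsumes(c, d, theta=None):
    """True iff clause c theta-subsumes clause d (backtracking search)."""
    theta = theta or {}
    if not c:
        return True
    first, rest = c[0], c[1:]
    for lit in d:
        t = match_literal(first, lit, theta)
        if t is not None and subsumes(rest, d, t):
            return True
    return False

# p(X, Y) subsumes the clause {p(a, b), q(a)} via theta = {X/a, Y/b}:
C = [("p", ("X", "Y"))]
D = [("p", ("a", "b")), ("q", ("a",))]
print(subsumes(C, D))  # True
```

Subsumption relative to a bottom clause, as studied in the paper, further restricts which substitutions and literals are admissible; the sketch above shows only the unrestricted order against which Progol's operator is incomplete.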
Similar papers
Macro-Operators in Multirelational Learning: A Search-Space Reduction Technique
Refinement operators are frequently used in the area of multirelational learning (Inductive Logic Programming, ILP) in order to search systematically through a generality order on clauses for a correct theory. Only the clauses reachable by a finite number of applications of a refinement operator are considered by a learning system using this refinement operator; i.e., the refinement operator dete...
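A downward refinement operator, as used by the systems above, maps a clause to a set of one-step specialisations. A toy sketch under illustrative assumptions (a fixed literal vocabulary and constant set, not taken from any of the cited papers) might look like:

```python
# Hypothetical vocabulary for the sketch; a real ILP system derives these
# from the background knowledge and, in IE-based systems, the bottom clause.
LITERALS = [("q", ("X",)), ("r", ("X", "Y"))]
CONSTANTS = ["a", "b"]

def variables(clause):
    # Variables are capitalised strings by convention (assumption).
    return {t for _, args in clause for t in args if t[:1].isupper()}

def refine(clause):
    """Yield one-step specialisations of `clause` (a list of literals)."""
    # (1) specialise by adding a body literal from the vocabulary
    for lit in LITERALS:
        if lit not in clause:
            yield clause + [lit]
    # (2) specialise by instantiating a variable with a constant
    for v in variables(clause):
        for c in CONSTANTS:
            yield [(p, tuple(c if t == v else t for t in args))
                   for p, args in clause]

for child in refine([("p", ("X",))]):
    print(child)
```

The search space of a learner is exactly the closure of its start clause under such an operator, which is why the operator's completeness properties (the topic of several abstracts here) matter.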
Logic-based machine learning using a bounded hypothesis space: the lattice structure, refinement operators and a genetic algorithm approach
The rich representation inherited from computational logic makes logic-based machine learning a competent method for application domains involving relational background knowledge and structured data. There is however a trade-off between the expressive power of the representation and the computational costs. Inductive Logic Programming (ILP) systems employ different kinds of biases and heuristics to ...
Perfect Refinement Operators can be Flexible
A (weakly) perfect ILP refinement operator was described in [1]. Its main disadvantage however is that it is static and inflexible: for ensuring non-redundancy, some refinements of a hypothesis are disallowed in advance, regardless of the search heuristic, which may recommend their immediate exploration. (Similar problems are faced by Progol and other complete and non-redundant systems.) On the...
Stochastic Refinement
The research presented in this paper is motivated by the following question: how can the generality order of clauses and the relevant concepts, such as refinement, be adapted for use in a stochastic search? To address this question we introduce the concept of stochastic refinement operators and adapt a framework, called stochastic refinement search. In this paper we introduce stochastic refine...
Generalizing Refinement Operators to Learn Prenex Conjunctive Normal Forms
Inductive Logic Programming considers almost exclusively universally quantified theories. To add expressiveness, prenex conjunctive normal forms (PCNF) with existential variables should also be considered. ILP mostly uses learning with refinement operators. To extend refinement operators to PCNF, we should first do so with substitutions. However, applying a classic substitution to a PCNF with exi...